KALMAN FILTER PERFORMANCE DEGRADATION
WITH AN ERRONEOUS PLANT MODEL
LARRY BLAYNE NOFZIGER
GERALD LEE DEVINS
APPROVED FOR PUBLIC RELEASE
KALMAN FILTER PERFORMANCE DEGRADATION
WITH AN ERRONEOUS PLANT MODEL
by
Larry Blayne Nofziger
Lieutenant Commander, United States Navy
B.S. in Electrical Engineering
Indiana Institute of Technology, 1954
and
Gerald Lee Devins
Lieutenant, United States Navy
B.S. in Engineering Electronics
Naval Postgraduate School, 1965
Submitted in partial fulfillment of the requirements
for the degree of
MASTER OF SCIENCE IN ENGINEERING ELECTRONICS
from the
NAVAL POSTGRADUATE SCHOOL
June 1967
ABSTRACT
This investigation is concerned with the effects of
employing a Kalman filter to estimate the states in a
system for which the mathematical model is inaccurate.
Consideration is given to both intentional and unintent-
ional mis-identification of parameters in the assumed
plant dynamics. An algorithm consisting of four matrix
equations is derived which yields the actual covariance
of estimation error when errors in the assumed model are
known. Depending upon the gain sequence used, the de-
rived equations can be used either to 1) produce optimal
estimates when errors are deliberate or 2) aid in the
determination of mis-identification costs in terms of
filter performance degradation if the relative accuracy
of parameter identification is known.
Analytic examples of scalar cases are included, as
well as computer simulations for specific higher order
systems, including the employment of a second order filter
model with a fourth order plant.
TABLE OF CONTENTS
Chapter 1
    Introduction
    The Plant Model
    The Kalman Filter
    The Problem Statement
Chapter 2
    A Review of Recent Investigations
Chapter 3
    The Problem Development
    The Optimum Filter
    The Suboptimum Filter
    The Actual Covariance Matrix
    The Recursive Calculation of Actual Covariance
    The Sensitivity to Errors in Plant Dynamics
    Examples
        1. The Simple Amplifier
        2. Integrator-Amplifier
        3. Low Pass Filter
Chapter 4
    Computer Simulations
        a. Verification of Recursive Solution
        b. Example of Two-parameter Sensitivity
        c. Model Order Reduced by One
        d. Model Order Reduced by Two
Chapter 5
    Conclusions
Bibliography
LIST OF ILLUSTRATIONS

FIGURE
1-1  The Plant Model
1-2  Kalman Filter Operations
3-1  The Three Types of Calculated Covariance
3-2  The Simple Amplifier
3-3  Degradation for the Simple Amplifier
3-4  Integrator-Amplifier
3-5  Degradation for the Integrator-Amplifier
3-6  Low Pass Filter
4-1  Calculation of Filter Performance Degradation
4-2  Degradation vs Parameter Error (2-Parameter Model)
4-3  Degradation vs Parameter Error (3-Parameter Model, 1 Incorrect)
4-4  Degradation vs Parameter Error (4-Parameter Model, 2 Incorrect)
LIST OF SYMBOLS AND ABBREVIATIONS

Symbol      Dimensions   Meaning
A           n x n        Canonic matrix of state equation coefficients
a           scalar       Amplification; model parameter, amplification
                         and feedback coefficient
B           n x m        Input distribution matrix
D(k)        n x n        E{x(k)x^T(k)}
E{x}        operator     Expected value of x
G_o(k)      n x p        Optimal weighting for kth observation
G_f(k)      n x p        Filter weighting for kth observation
H           p x n        Observability matrix
I           n x n        Identity matrix
J           scalar       Performance index, trace of P(k/k)
K(k)        n x n        E{x̂(k/k)x^T(k)}
k           integer      Index or sequential stage
k/k         integers     "at kth iteration given k samples"
k+1/k       integers     "predicted at (k+1)th iteration given k samples"
m           integer      Number of inputs
n           integer      Number of states, order of system
P_a( )      n x n        Actual covariance matrix (calculated)
P_f( )      n x n        Filter covariance matrix (calculated)
P_o( )      n x n        Optimum covariance matrix (calculated)
P_e( )      n x n        Ensemble average covariance of estimation error
p           integer      Number of states observed
Q_f         n x n        Γ_f E{u(k)u^T(k)}Γ_f^T
Q_p         n x n        Γ_p E{u(k)u^T(k)}Γ_p^T
R           p x p        Measurement noise covariance matrix
u(k)        m x 1        Excitation vector or input vector
v(k)        p x 1        Measurement noise vector
x(k)        n x 1        State vector at kth sample
x̂(k/k)      n x 1        Estimate of state vector given k samples
x̂(k+1/k)    n x 1        Predicted value of x(k+1) given k samples
z(k)        p x 1        Vector of observations at kth sample
α           scalar       General plant parameter
Γ_f(T)      n x m        Filter transmission matrix
Γ_p(T)      n x m        Plant transmission matrix
β           scalar       Model parameter, damping factor
Φ_f(T)      n x n        Filter state transition matrix
Φ_p(T)      n x n        Plant state transition matrix
Ω           m x m        Covariance matrix of input excitation
ω           scalar       Model parameter, natural frequency
CHAPTER 1
INTRODUCTION
In recent years, a considerable portion of the litera-
ture in the field of automatic control has been concerned
with plant identification and state estimation. In most con-
trol problems, it is first necessary to establish a suitable
mathematical model of the process to be controlled in order
to perform any meaningful analysis or synthesis. Then, if
some sort of observation of the process is available, this
observation along with the mathematical model, and at least
a probabilistic description of the forcing function, pro-
vides the necessary information to implement an estimation
scheme which will give a measure of what the plant is doing
at the present time, has done in the past, or will do in
the future.
Whenever estimation is attempted with an inaccurate
mathematical model, the estimation accuracy must of neces-
sity deteriorate. The investigation reported here is con-
cerned with the degradation of estimation accuracy when an
erroneous model of the plant dynamics is employed.
At this point it is necessary to explain two reasons
for not using an accurate mathematical model in the esti-
mation scheme. The first is unintentional, a result of
the simple fact that the mathematical model which most,
accurately describes the plant is not known. A second pos-
sible reason might be the deliberate employment of a low-
order model of a more complicated plant. Because the
mathematical involvement of most estimation schemes is
inescapably tied to system order, much computational time
can be saved whenever the model order can be reduced. Such
reduction may be necessary for "real time" estimation, at
the expense of estimation accuracy. This is assuming of
course that in the problem at hand, estimates of the higher
order states are not needed.
The three "tenses" of estimation mentioned above are
known as filtering, smoothing, and prediction, respectively.
In current practice "what the plant is doing" is described
mathematically by a state vector, the components of which
represent the minimum number of entities required to com-
pletely describe the condition of the plant. As an example,
if the plant were a passive electrical network, the re-
quired state vector components could be inductor currents
and capacitor voltages.
This investigation was restricted to a discrete-
sampled-data description of the plant and estimation scheme.
The use of this mathematical framework leads to a sequen-
tial filtering scheme. In this technique of estimation, as
in most, a weighting is given to each observation according
to how much new information it gives relative to that
already received. In current practice this weighting or
"filter gain" is calculated to minimize (or maximize) some
performance index which has previously been defined. When
linear operations on the data are employed and the index to
be minimized is mean squared estimation error, the resulting
estimation scheme is called a Kalman filter [1, 5].
The remainder of this Chapter includes the development
of the plant model, a brief review of Kalman filter equations
and a statement of the problem to be investigated. In Chap-
ter 2, the results of some other recent investigations are
discussed. In Chapter 3, a set of recursive equations for
finding a measure of estimation degradation is derived,
followed by three simple examples. Results of digital com-
puter simulations using the derived equations for several
examples are presented in Chapter 4. Chapter 5 consists of
comments on the results of the computer simulations.
THE PLANT MODEL
Mathematical formulation of the problem proceeds from
the assumption that there exists a set of linear, constant
coefficient, first order differential equations which
adequately describe the plant, or message generating pro-
cess. These are of the form
ẋ = Ax + Bu                                      (1-1)
where x is the state vector of n components for an nth order
system, u is the m x 1 input vector, and A is n x n, B is
n x m, both matrices of constants. The sampled data matrix
difference equation which gives response at sampling in-
stants becomes
x(k+1) = Φ(T)x(k) + Γ(T)u(k)                     (1-2)
where Φ(T) is the n x n discrete state transition matrix,
Γ(T) is the n x m input distribution matrix and u(k) is a
sampled and zero order held input vector. Constant differ-
ential equation coefficients are not necessary, but are used
here for simplicity. The magnitude of each component of u(k)
is assumed to be a normally distributed random variable
with zero mean and known variance [5]. The observations of
the system states are assumed to be contaminated by additive
gaussian white noise of zero mean and known variance. In
matrix notation the observation vector z at the kth samp-
ling instant is given as
z(k) = Hx(k) + v(k)                              (1-3)
where H is the p x n observation matrix, here assumed known
and constant and v(k) is the p x 1 vector of additive meas-
urement noise. A block diagram depicting the above con-
ditions is shown in Figure 1-1. The double lines represent
vector signal flow.
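In modern numerical terms the plant of equations 1-2 and 1-3 is straightforward to simulate. The sketch below is an illustration only, not part of the original investigation; the particular matrices and noise covariances are arbitrary placeholder values.

```python
import numpy as np

def simulate_plant(Phi, Gamma, H, Omega, R, x0, steps, rng):
    """Simulate x(k+1) = Phi x(k) + Gamma u(k) and z(k) = H x(k) + v(k)
    (equations 1-2 and 1-3), with u(k) and v(k) zero-mean gaussian."""
    x = np.asarray(x0, dtype=float)
    states, observations = [], []
    for _ in range(steps):
        u = rng.multivariate_normal(np.zeros(Omega.shape[0]), Omega)
        v = rng.multivariate_normal(np.zeros(R.shape[0]), R)
        x = Phi @ x + Gamma @ u        # state transition (1-2)
        z = H @ x + v                  # noisy observation (1-3)
        states.append(x.copy())
        observations.append(z)
    return np.array(states), np.array(observations)

# illustrative second-order plant; all numbers are placeholders
Phi = np.array([[1.0, 0.1],
                [0.0, 0.9]])
Gamma = np.array([[0.0],
                  [0.1]])
H = np.array([[1.0, 0.0]])     # p = 1: only the first state is observed
Omega = np.array([[1.0]])      # covariance of the input excitation
R = np.array([[0.01]])         # measurement noise covariance
rng = np.random.default_rng(0)
xs, zs = simulate_plant(Phi, Gamma, H, Omega, R, np.zeros(2), 50, rng)
```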
THE KALMAN FILTER
The sequential estimation technique developed by
R.E. Kalman and expanded by others, takes the plant descrip-
tion as defined in the preceding section, and produces an
estimate x̂(k/k) of the state vector x(k) at the kth
iteration given k observations. This estimation scheme is
commonly called the Kalman filter and can be described
mathematically as
x̂(k/k) = Φ(T)x̂(k-1/k-1) + G(k)[z(k) - HΦ(T)x̂(k-1/k-1)]
                                                 (1-4)
where G(k) is the filter weighting or gain applied at
the kth iteration. This gain is calculated to minimize the
scalar performance index
J = E{[x(k) - x̂(k/k)]^T [x(k) - x̂(k/k)]}         (1-5)
i.e., the mean square estimation error.
Fig. 1-1 The Plant Model

Fig. 1-2 Kalman Filter Operations
The calculation of G(k) is facilitated by defining a
matrix of covariances of estimation error as
P(k/k) ≡ E{[x(k) - x̂(k/k)][x(k) - x̂(k/k)]^T}     (1-6)
The trace of P(k/k) is of course simply J. With the further
definition,
P(k+1/k) ≡ E{[x(k+1) - x̂(k+1/k)][x(k+1) - x̂(k+1/k)]^T}
                                                 (1-7)
the Kalman sequential equations can be stated as:
P(k+1/k) = Φ(T)P(k/k)Φ^T(T) + Q(k)               (1-8)
G(k+1) = P(k+1/k)H^T [HP(k+1/k)H^T + R]^-1        (1-9)
P(k+1/k+1) = [I - G(k+1)H]P(k+1/k)
           - P(k+1/k)H^T G^T(k+1)
           + G(k+1)[HP(k+1/k)H^T + R(k)]G^T(k+1)  (1-10)
Equation 1-10 can be reduced to
P(k+1/k+1) = [I - G(k+1)H]P(k+1/k)               (1-11)
where
R(k) ≡ E{v(k)v^T(k)}
Q(k) ≡ Γ(T)E{u(k)u^T(k)}Γ^T(T)
E{·} = the expectation operation
( )^T = the transpose operation
( )^-1 = the matrix inversion operation
I = the identity matrix
R(k) is a p x p diagonal matrix based upon a priori know-
ledge of the average measurement noise power. Q(k) is an
n x n matrix containing similar a priori information on the
random excitation. Note that for the single input case,
E{u(k)u^T(k)} is a scalar which has been given the symbol Ω
in the development to follow. Under assumptions of
stationarity of input excitation and measurement noise
statistics, Ω is a constant and R is a constant matrix.
For a scalar observation R is also a scalar. It is further
assumed that excitation and measurement noise are statis-
tically independent. A block diagram of filter operations
is shown in Figure 1-2.
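The sequential equations above map directly onto code. The following minimal sketch keeps the notation of this section, with T absorbed into Φ and Γ; the scalar demonstration at the bottom uses illustrative values only (a memoryless plant with Φ = 0, Γ = a, H = 1, for which the gain is a²Ω/(a²Ω + R) at every step).

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, Gamma, H, Omega, R):
    """One iteration of the sequential equations: 1-8 (prediction),
    1-9 (gain), 1-4 (estimate update), 1-11 (covariance update)."""
    Q = Gamma @ Omega @ Gamma.T                        # Q(k) as defined above
    P_pred = Phi @ P @ Phi.T + Q                       # (1-8)
    G = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (1-9)
    x_pred = Phi @ x_hat
    x_new = x_pred + G @ (z - H @ x_pred)              # (1-4)
    P_new = (np.eye(len(x_hat)) - G @ H) @ P_pred      # (1-11)
    return x_new, P_new, G

# scalar illustration: Phi = 0, Gamma = a = 20, Omega = 1, R = .01
a = 20.0
x_hat, P = np.zeros(1), np.zeros((1, 1))
x_hat, P, G = kalman_step(x_hat, P, np.array([5.0]),
                          np.zeros((1, 1)), np.array([[a]]),
                          np.eye(1), np.array([[1.0]]), np.array([[0.01]]))
```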
The derivation which leads from the definitions of
P(k/k) and P(k+1/k), i.e., equations 1-6 and 1-7, to the
recursive equations 1-8, 1-9, and 1-11, has been done in
many ways by many authors since 1960 and will not be re-
peated here [4, 7]. However, it will be shown that the
recursive equations to be derived in Chapter 3 which
account for filter degradation, reduce to the original
Kalman equations when plant and filter models coincide,
and the gain matrix is computed so as to minimize estima-
tion errors.
THE PROBLEM STATEMENT
The problem under consideration can now be stated as:
Given a plant most accurately described by equations 1-2
and 1-3, what filter performance degradation results from
the implementation of the Kalman filter equation 1-4, when
the gain sequence calculated using equations 1-8, 1-9 and
1-11 is based on a model of the plant which is incorrect
in its representation of the plant dynamics?
CHAPTER 2
A REVIEW OF RECENT INVESTIGATIONS
The practical difficulties encountered when attempting
to identify a correct (or "best") mathematical model of a
plant or process to be observed are not treated here. It is
assumed that the identification has been done but is subject
to errors or inaccuracies. The Kalman filter performance
may be degraded by errors in any of the several quantities
used in the calculation of the weighting G(k). (See
equations 1-8, 1-9, 1-11). Numerous investigators have
considered this problem; some of their results are summar-
ized and commented upon below. Methods for practical
implementation or error analysis have been included in some
cases.
In 1964 Fagin reported a generalized error analysis
which included recursive equations for computing the incre-
mental change in the covariance matrix when the filtering
is done with an incorrect state transition matrix, and in-
correct a priori noise statistics are used in computing
gain [2]. The analysis allows a time varying observability
matrix and sample interval. The assumed form of the plant
in Fagin's investigation differs enough from the form
assumed here that no attempt has been made to modify his re-
sults to fit the framework of the problem given in Chapter
1. A rough interpretation of those results, using the
notation of this paper, would be the effect of errors in
the Φ(T), Q(k) and R(k) matrices.
The recursive equations must be provided with starting
values. The estimate x̂(0/0) must be provided as well as
P(0/0) for the first gain calculation. Whenever possible,
the values used for x̂(0/0) should be typical of what might
be expected for the first observation. For instance if
the output state of a system is thought to have zero mean,
x̂(0/0) should be set to zero. The initial covariance
matrix P(0/0) must reflect some level of confidence in
the initial filter state. Nishimura has defined an error
matrix which is the difference between the actual covariance
and that calculated by the filter [8]. He has shown that
if the error matrix is non-negative definite, the actual co-
variance of estimation error is bounded by the covariance
that is calculated using the Kalman recursive equations
for the optimum filter, i.e., equations 1-8, 1-9, and
1-11. This suggests that the trace of P(0/0) be given
large values to ensure that the trace of the error matrix
is non-negative. If application is restricted to system
models which are fixed and uniformly completely observable
and controllable, in the control theory sense, then accord-
ing to Kalman, the calculated covariance matrix will con-
verge to some constant matrix after enough samples [6].
The number of iterations required to reach steady state
also depends on P(0/0). If this matrix is initialized
with overly "pessimistic" values to satisfy Nishimura's
stability condition, the filter may take too long to reach
steady state in a given application. Therefore, when filter
"settling time" is critical, the initialization of P(0/0)
requires some additional knowledge of the variance of the
plant states so that stability may be ensured without
unnecessarily increasing settling time. For this investi-
gation the initial filter state x̂(0/0) is set to the same
value as the plant initial vector and P(0/0) is set to the
zero matrix.
Normally stationary statistics are assumed for the
input excitation and measurement noise, making Ω and R
constant matrices. Errors in these quantities directly
affect the elements of the steady state calculated co-
variance matrix. In 1966, Heffes reported on the effects
of both incorrect initial covariance matrix and incorrect
noise statistics in the model [3]. He includes recursive
expressions for calculated covariance and gain matrices
based on the false values. Results of a computer simula-
tion of a numerical example showed the variance of the
first two states of a third order system as calculated
from the equations was always larger than that actually
being attained, the latter of course being still greater
than the optimum, given the correct model. In order to
better isolate the effects of erroneous identification of
plant dynamics, Ω and R will henceforth be assumed known
and constant.
During this investigation, the authors became aware
of very similar work being performed by S.R. Neal at NOTS
China Lake. When those results are published, comparison of
mathematical results of the two investigations should re-
veal only differences in notation and assumption on the
form of the plant model. Both consider plant dynamics as
being misidentified in some way, resulting in errors in the
assumed state transition matrix. By isolating this type of
error, it may be possible to learn which of the types of
error discussed in this chapter would have the most degrading
effect on filter performance in a given application. Per-
haps greater emphasis could then be placed on elimination
of certain types of error when establishing the system
mathematical model.
The effects of any of the identification errors dis-
cussed above can be found by producing a set of recursive
expressions which will produce the matrix of actual error
covariance P_a(k/k) and then comparing this with the
covariance matrix P_f(k/k) produced by using the normal
Kalman equations. Another very important comparison is
that between P_a(k/k) and the optimum result P_o(k/k)
obtained when the plant model is known exactly. The
difference between these last two quantities gives the
true "cost" of plant mis-identification in terms of
variance of estimation error, and is the measure of degrad-
ation used in this paper. These quantities are formally
defined and a set of equations is developed for P_a(k/k)
in Chapter 3.
CHAPTER 3
THE PROBLEM DEVELOPMENT
As the first step toward a solution to the problem of
filter performance degradation, a measure of degradation
must be formally defined, along with the various covariances,
according to the manner in which they were obtained. These
definitions are as follows:
The measure of filter performance degradation to be
used is defined as the trace of the difference matrix ΔP
where
ΔP = P_a(k/k) - P_o(k/k)   (steady state)        (3-1)
P_a(k/k) is an n x n matrix, the elements of which are
the covariance values of actual estimation error produced
by the filter when a given (and possibly suboptimal) gain
sequence G(i), i = 0,1,2,...,k, is used in the filter
equation 1-4, and there has been mis-identification of
plant dynamics. The recursive equations to be developed
in this chapter will produce P_a(k/k).
P_o(k/k) is an n x n matrix of the covariance values of
estimation error which results when the optimum gain
sequence G_o(i), i = 0,1,2,...,k, is used in the filter
equation 1-4, and there has been no mis-identification of
plant dynamics. Equation 1-11 produces P_o(k/k) provided
there are no identification errors as discussed above.
The third quantity to be defined is P_f(k/k). This is
a matrix of the covariance values of estimation error re-
sulting when a given (and possibly suboptimal) gain se-
quence G(i), i = 0,1,2,...,k, is used in the filter
equation 1-4, and there has been mis-identification of
plant dynamics. P_f(k/k) is a square matrix of the same
dimensions as the order of the filter model. Equation 1-11
produces P_f(k/k) provided there are identification errors
as discussed above. The means of producing the three quan-
tities defined above are diagrammed in Figure 3-1.
THE OPTIMUM FILTER
An estimation problem that can be fitted to the Kalman
filter framework is solvable by use of equations 1-4, 1-8,
1-9, and 1-11. This requires that the plant be perfectly
described by equations 1-2 and 1-3. However, when the
filter employs an incorrect model, the recursively calcu-
lated covariance (equation 1-11) is no longer optimum, so
the performance index as given by equation 1-5 no longer
applies. The objective of minimizing the mean squared
error is still valid but the mean square error now becomes
the trace of the actual covariance matrix of estimation
error, P_a(k/k). To distinguish the types of covariance
mentioned thus far, subscripts have become necessary. In
the derivation of P_a(k/k) which follows, the subscript f
denotes filter quantities while the subscript p refers to
the most accurate mathematical model of the plant or system.
THE SUBOPTIMUM FILTER
Suppose the matrix difference equation giving the
response of a discrete system at sampling instants has
been identified as
x(k+1) = Φ_f(T)x(k) + Γ_f(T)u(k)                 (3-2)
Fig. 3-1 The Three Types of Calculated Covariance
when the most accurate description of the same system is
given by,
x(k+1) = Φ_p(T)x(k) + Γ_p(T)u(k)                 (3-3)
where
Φ_f(T) + δΦ(T) = Φ_p(T);  Γ_f(T) + δΓ(T) = Γ_p(T)   (3-4)
Assume the observation in either case is given by
z(k) = Hx(k) + v(k)                              (3-5)
If the Kalman filter equations 1-8 through 1-10 were to be
used, the filtering would be suboptimal, i.e., equation 1-5
would not be minimized. This is readily seen by noting
that equation 1-8 becomes
P(k+1/k) = Φ_f(T)P(k/k)Φ_f^T(T) + Q_f            (3-6)
and being independent of observed data, cannot reflect
errors in Φ(T). The calculated gain which depends upon
equation 3-6 would therefore be suboptimum.
THE ACTUAL COVARIANCE MATRIX
The errors δΦ(T) and δΓ(T) are taken into account as
follows:
Assume a plant has been misidentified as in the last
section, equation 3-2, and a Kalman filter applied. The
filter equation would be:
x̂(k+1/k+1) = Φ_f(T)x̂(k/k) + G(k+1)[z(k+1) - HΦ_f(T)x̂(k/k)]
                                                 (3-7)
If this equation and the correct plant description, equation
3-3, are substituted into the appropriate quantities of the
definition for P(k/k), equation 1-6, the resulting expres-
sion becomes P_a(k/k). This can be shown as follows (reduc-
ing the index by one and dropping T from Φ(T) and Γ(T)):
x(k) - x̂(k/k) = Φ_p x(k-1) + Γ_p u(k-1) - Φ_f x̂(k-1/k-1)
              - G(k)[z(k) - HΦ_f x̂(k-1/k-1)]
                                                 (3-8)
but
Φ_p = Φ_f + δΦ
and for convenience define
Φ_f - G(k)HΦ_f = Φ_f*                            (3-9)
δΦ - G(k)HδΦ = δΦ*                               (3-10)
Γ_p - G(k)HΓ_p = Γ_p*                            (3-11)
then after some manipulation, equation 1-6 becomes,
P_a(k/k) = Φ_f* P_a(k-1/k-1) Φ_f*^T
         + Φ_f* E{x(k-1)x^T(k-1)} δΦ*^T
         - Φ_f* E{x̂(k-1/k-1)x^T(k-1)} δΦ*^T
         + δΦ* E{x(k-1)x^T(k-1)} Φ_f*^T
         - δΦ* E{x(k-1)x̂^T(k-1/k-1)} Φ_f*^T
         + δΦ* E{x(k-1)x^T(k-1)} δΦ*^T
         + Γ_p* E{u(k-1)u^T(k-1)} Γ_p*^T
         + G(k) E{v(k)v^T(k)} G^T(k)             (3-12)
Now, taking the definition, equation 1-7, reducing the
index by one and noting that
x̂(k/k-1) = Φ_f x̂(k-1/k-1)                        (3-13)
then upon substitution of appropriate quantities, equation
1-7 becomes
P_a(k/k-1) = Φ_f P_a(k-1/k-1) Φ_f^T
           + Φ_f E{x(k-1)x^T(k-1)} δΦ^T
           - Φ_f E{x̂(k-1/k-1)x^T(k-1)} δΦ^T
           + δΦ E{x(k-1)x^T(k-1)} Φ_f^T
           - δΦ E{x(k-1)x̂^T(k-1/k-1)} Φ_f^T
           + δΦ E{x(k-1)x^T(k-1)} δΦ^T
           + Γ_p E{u(k-1)u^T(k-1)} Γ_p^T         (3-14)
Comparison of equation 3-12 with equation 3-14 and
the use of the definitions in equations 3-9, 3-10, and 3-11,
reveals that
P_a(k/k) = P_a(k/k-1) - G(k)HP_a(k/k-1) - P_a(k/k-1)H^T G^T(k)
         + G(k)[HP_a(k/k-1)H^T + R]G^T(k)        (3-15)
Kalman has shown that if gain is calculated from equation
1-9, the trace of the right hand side of equation 3-15 is
minimized, and the recursive equation 1-11 is obtained. It
can be concluded here that given a known error 6$, the
recursive equations which would be used for minimum variance
estimates would be 1-9, 3-7, 1-11 and 3-14.
That is to say, if a Kalman filter is to be applied
with a known error 6$ in the model of plant dynamics, then
minimum variance estimates can still be produced, provided
the error 6 4> is taken into account by use of equations 1-9,
3-7, 1-11 and 3-14. Except for the case of intentional
mis-modeling for the sake of order reduction, the error δΦ
would of course be used to correct Φ_f and the original
optimal Kalman filter equations 1-8, 1-9 and 1-11 would be
used.
THE RECURSIVE CALCULATION OF ACTUAL COVARIANCE
Equation 3-15 is entirely suitable for use as a recur-
sive expression for computer simulation. However, equation
3-14 must be adapted from its present form to one which
avoids explicit use of the expectation operation. The
approach taken was to define the matrix quantities
D(k) ≡ E{x(k)x^T(k)}                             (3-16)
K(k) ≡ E{x̂(k/k)x^T(k)}                           (3-17)
Matrix algebra and the advance of index yields (from equa-
tion 3-14)
P(k+1/k) = Φ_f P(k/k)Φ_f^T + Φ_f D(k)δΦ^T - Φ_f K(k)δΦ^T
         + δΦ D(k)Φ_f^T - δΦ K^T(k)Φ_f^T + δΦ D(k)δΦ^T + Q_p   (3-18)
Equation 3-18 is in usable form, but requires recursive ex-
pressions for D(k) and K(k). These are obtained from the
definitions (equations 3-16 and 3-17):
D(k+1) = E{x(k+1)x^T(k+1)} = E{[Φ_p x(k) + Γ_p u(k)][Φ_p x(k) + Γ_p u(k)]^T}
                                                 (3-19)
Expanding the right hand side and noting that u(k) and x(k)
are uncorrelated,
D(k+1) = Φ_p E{x(k)x^T(k)}Φ_p^T + Γ_p E{u(k)u^T(k)}Γ_p^T
       = Φ_p D(k)Φ_p^T + Q_p                     (3-20)
Similar manipulations with the definition of K(k+1) yield
K(k+1) ≡ E{x̂(k+1/k+1)x^T(k+1)}
       = [I - G(k+1)H]Φ_f K(k)Φ_p^T + G(k+1)HD(k+1)   (3-21)
The iterative expressions derived are now summarized in the
proper order for calculation:
P(k+1/k) = Φ_f P(k/k)Φ_f^T + Φ_f D(k)δΦ^T - Φ_f K(k)δΦ^T
         + δΦ D(k)Φ_f^T - δΦ K^T(k)Φ_f^T + δΦ D(k)δΦ^T
         + Q_p                                   (3-18)
G(k+1) = P(k+1/k)H^T [HP(k+1/k)H^T + R]^-1        (1-9)
P(k+1/k+1) = P(k+1/k) - G(k+1)HP(k+1/k) - P(k+1/k)H^T G^T(k+1)
           + G(k+1)[HP(k+1/k)H^T + R]G^T(k+1)     (3-15)
D(k+1) = Φ_p D(k)Φ_p^T + Q_p                      (3-20)
K(k+1) = [I - G(k+1)H]Φ_f K(k)Φ_p^T + G(k+1)HD(k+1)   (3-21)
Several comments on the appearance of equation 1-9 in this
list are appropriate at this time. First, if the plant is
correctly identified, i.e., δΦ = 0, then it is
obvious that equation 3-18 reverts to 1-8 and equation 3-15
is of course equation 1-10. Equations 3-20 and 3-21 would
still exist but would not be used in 3-18, therefore stan-
dard Kalman filtering results. Second, if the plant is
mis-identified, the use of equation 1-9 in the order shown
will produce the set of minimum variance estimates to be
used in the case of order reduction mis-modeling, mentioned
above. Third, if any other gain sequence is produced ex-
ternally to equations 3-18, 3-15, 3-20 and 3-21 then
equation 3-15 will give the actual covariance of estimation
error that would result when the gain sequence supplied is
utilized in the Kalman filter equation 1-4.
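The five equations above translate directly into code. The sketch below is an illustration, not the authors' program: it runs equations 3-18, 1-9, 3-15, 3-20, and 3-21 in the order listed, passing a precomputed `gain` covers the externally supplied gain sequence case, and the closing check confirms that with Φ_f = Φ_p the recursion reproduces the standard Kalman covariance (here for a scalar random-walk plant with illustrative noise values).

```python
import math
import numpy as np

def actual_covariance_step(P, D, K, Phi_f, Phi_p, H, Q_p, R, gain=None):
    """One pass through equations 3-18, 1-9, 3-15, 3-20, 3-21."""
    dPhi = Phi_p - Phi_f                                   # the error delta-Phi
    P_pred = (Phi_f @ P @ Phi_f.T
              + Phi_f @ D @ dPhi.T - Phi_f @ K @ dPhi.T
              + dPhi @ D @ Phi_f.T - dPhi @ K.T @ Phi_f.T
              + dPhi @ D @ dPhi.T + Q_p)                   # (3-18)
    if gain is None:
        gain = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # (1-9)
    GH = gain @ H
    P_new = (P_pred - GH @ P_pred - P_pred @ GH.T
             + gain @ (H @ P_pred @ H.T + R) @ gain.T)     # (3-15)
    D_new = Phi_p @ D @ Phi_p.T + Q_p                      # (3-20)
    I = np.eye(Phi_f.shape[0])
    K_new = (I - GH) @ Phi_f @ K @ Phi_p.T + GH @ D_new    # (3-21)
    return P_new, D_new, K_new, gain

# check: with Phi_f == Phi_p the result must match standard Kalman filtering
Phi = np.array([[1.0]])          # scalar random-walk plant (illustrative)
H, Q_p, R = np.eye(1), np.array([[0.04]]), np.array([[0.01]])
P, D, K = np.zeros((1, 1)), np.zeros((1, 1)), np.zeros((1, 1))
for _ in range(100):
    P, D, K, G = actual_covariance_step(P, D, K, Phi, Phi, H, Q_p, R)
P_ss = 0.5 * (math.sqrt(0.04**2 + 4 * 0.01 * 0.04) - 0.04)  # scalar fixed point
```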
THE SENSITIVITY TO ERRORS IN PLANT DYNAMICS
One of the original objectives of this investigation
was to find an analytic expression for the sensitivity of
the performance index J to plant identification errors.
This would be of the form
dJ = (∂J/∂a₁)da₁ + (∂J/∂a₂)da₂ + ···
where aᵢ is one of the plant parameters subject to mis-
identification. The development of such an expression
would involve finding total differentials for each of the
trace elements of P_a (steady state) and then adding to get
dJ. If the filter is stable and P_a(k/k) eventually reaches
a constant value, an implicit expression for the steady
state covariance is easy to obtain by setting P_a(k+1/k+1)
equal to P_a(k/k). The difficulty lies in the amount of
algebra involved when the system order is two or greater.
Each partial derivative of an element of the covariance
matrix is a function of all the other elements.
To then find partial derivatives of the steady state
covariance matrix trace elements, P(k/k) is considered
along with its definition,
P(k/k) = E{x(k)x^T(k) - x̂(k/k)x^T(k) - x(k)x̂^T(k/k) + x̂(k/k)x̂^T(k/k)}
       = D(k) - K(k) - K^T(k) + E{x̂(k/k)x̂^T(k/k)}   (3-22)
The sum of the terms on the right hand side reaches a con-
stant or steady state value. Moreover, it can be shown
that in a stable system in which Φ(T) ≠ I each of the terms
in the sum becomes constant. For example, in the stable
time invariant plant with feedback, the average "power" in
the states D(k) becomes a constant times the driving "power".
Therefore, D(k+1) is set equal to D(k); an implicit
function is obtained and ∂D_ij/∂α, where α is a plant parameter,
can be found. The partial derivative ∂P_ij/∂α will be the sum
of the similar quantities on the right hand side of equation
3-22. The procedure is straightforward, but the amount of
algebra is prohibitive. No better method was found.
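A numerical alternative, not pursued in the text, is to sidestep the algebra entirely: iterate the filter recursion to steady state for slightly perturbed parameter values and form a central difference. A minimal sketch for an illustrative scalar plant with Φ = 1 and Γ = aT (all numbers are placeholders):

```python
def steady_state_J(a, Omega=1.0, R=0.01, T=0.1, iters=200):
    """Iterate the scalar Kalman recursion (equations 1-8, 1-9, 1-11)
    with Phi = 1 and Gamma = a*T until P(k/k) settles; in the scalar
    case the performance index J is just P itself."""
    P, Q = 0.0, Omega * (a * T)**2
    for _ in range(iters):
        P_pred = P + Q                    # (1-8) with Phi = 1
        G = P_pred / (P_pred + R)         # (1-9)
        P = (1.0 - G) * P_pred            # (1-11)
    return P

# central-difference approximation to dJ/da at a = 2
h = 1e-6
dJ_da = (steady_state_J(2.0 + h) - steady_state_J(2.0 - h)) / (2.0 * h)
```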
EXAMPLES
The examples which follow will serve to demonstrate
how rapidly algebraic complication can arise with slight in-
creases in plant complexity. All are scalar cases, making
the performance index J equal to the steady state covariance
P. For the two simplest examples a sensitivity function is
calculated, as well as the actual filter degradation ex-
pression. For the low pass filter example, a means of
obtaining the sensitivity function is discussed, but it is
not done. In that example only the degradation expression
is included as a function of the plant parameter.
Example 1: A Simple Amplifier
Consider the plant shown in figure 3-2. The state x
at the kth sampling instant is given as x(k) = au(k).
By comparison with the usual state space discrete notation
it can be seen that Φ(T) = 0;  Γ(T) = a
The kth observation is written as z(k) = x(k) + v(k)
Suppose that optimal estimates x̂(k/k) of the state x are
required. Then application of equation (1-4) yields
x̂(k/k) = G(k)z(k)                                (3-23)

Fig. 3-2 The Simple Amplifier
Applying equations (1-8), (1-9), and (1-11) for optimum
filtering, the recursive sequence becomes
Filter gain:  G(k+1) = a²Ω/(a²Ω + R)             (3-24)
Error covariance (variance):  P(k/k) = Ra²Ω/(a²Ω + R)   (3-25)
Conditional error variance:  P(k+1/k) = Q = a²Ω   (3-26)
Substitution of the expression for gain into equation 3-23
yields
x̂(k/k) = [a²Ω/(a²Ω + R)]z(k)                     (3-27)
Recalling that Ω is the variance of the perturbation and R
is the variance of the measurement noise, the quantity a²Ω
could be thought of as the average signal "power" and R as
the average noise "power", making the optimal weighting
signal power / (signal power + noise power)
which satisfies intuition for the case of observing a signal
in noise.
The sensitivity function for this example can be found
easily by differentiating equation 3-25 with respect to the
plant parameter,
dP/da = 2R²Ωa/(a²Ω + R)²                          (3-28)
Now suppose that the true plant is as shown in figure
3-2, but that the amplification has been incorrectly identi-
fied as a_f, where a = a_f + δa. Application of the Kalman
filter equations then gives a calculated gain
G = a_f²Ω/(a_f²Ω + R)                             (3-29)
If this gain is used to estimate x, the degradation due to
misidentification can be found as the difference between the
covariance resulting from using equations 3-18 and 3-15 and
the optimum value. Substitution of G into equations 3-18
and 3-15 yields
P_a(k/k) = RΩ(a_f⁴Ω + Ra²)/(a_f²Ω + R)²           (3-30)
Note that when a_f = a this reduces to equation 3-25, as re-
quired. The degradation in performance ΔP is therefore
obtained by subtracting the right hand side of equation 3-25
from P_a(k/k) as given in equation 3-30.
Degradation:  ΔP = P_a - P_o
ΔP = R²Ω²(a² - a_f²)² / [(a_f²Ω + R)²(a²Ω + R)]    (3-31)
Figure 3-3 shows degradation in the performance index J as
a per cent of the optimum versus percentage error in the
plant parameter. The values used were
R = .01;  Ω = 1.0;  a_f = 20.0
Degradation in this example is very slight; an error of 30%
in identification degrades the filter performance by only
.00133%. This may be partially explained by noting that the
degradation function shown is approximately directly propor-
tional to the square of the measurement noise variance,
and that a very low value was chosen for R. Nevertheless,
it can be concluded that the Kalman filter performance is
not very sensitive to plant identification in this applica-
tion .
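The closed forms above are easy to exercise numerically. The following sketch evaluates equations 3-25, 3-30 and 3-31; the variable names and the printed sweep are ours, and P_o is written in the form a²ΩR/(a²Ω + R) implied by the reduction noted after equation 3-30.

```python
# Numerical sketch of equations 3-25, 3-30 and 3-31 for the simple
# amplifier.  Symbol names (a, a_f, omega, R) are ours.

def p_optimal(a, omega, R):
    """Optimal steady-state covariance, equation 3-25."""
    return a**2 * omega * R / (a**2 * omega + R)

def p_actual(a, a_f, omega, R):
    """Actual covariance with the misidentified gain, equation 3-30."""
    return R * omega * (a_f**4 * omega + R * a**2) / (a_f**2 * omega + R)**2

def degradation(a, a_f, omega, R):
    """Degradation, equation 3-31 (algebraically equal to P_a - P_o)."""
    return (R**2 * omega**2 * (a**2 - a_f**2)**2
            / ((a_f**2 * omega + R)**2 * (a**2 * omega + R)))

if __name__ == "__main__":
    R, omega, a = 0.01, 1.0, 20.0
    for err in (0.0, 0.1, 0.3):        # 0%, 10%, 30% identification error
        a_f = a * (1.0 - err)
        dp = degradation(a, a_f, omega, R)
        print(f"error {err:4.0%}:  dP = {dp:.3e}  "
              f"({100.0 * dp / p_optimal(a, omega, R):.5f}% of optimum)")
```

The identity ΔP = P_a − P_o can be checked term by term, which is a useful guard against transcription slips in the algebra.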
Example 2. Integrator-Amplifier
As another example of only slightly greater difficulty,
consider the plant described by the transfer function

    x(s)/u(s) = a/s                                   (3-32)
Figure 3-4 shows the discrete representation of this plant.
[Fig. 3-3 Degradation for the Simple Amplifier]
[Fig. 3-4 Integrator-Amplifier: block diagram with input u(k), gain aT,
unit delay, state x(k), and observation z(k) = x(k) + v(k)]
The difference equation describing the response at sampling
instants is
    x(k+1) = x(k) + aTu(k)                            (3-33)

with the observation again consisting of a single state plus
noise. Examination of equation 3-33 reveals that

    Φ(T) = 1;  Γ(T) = aT;  Q = Ωa²T²                  (3-34)
The Kalman filter equations 1-8, 1-9 and 1-11 become,
respectively,

    P(k+1/k) = P(k/k) + Q                             (3-35)
    G(k+1) = P(k+1/k)/[P(k+1/k) + R]                  (3-36)
    P(k+1/k+1) = [1 − G(k+1)]P(k+1/k)                 (3-37)
The steady state covariance can be found by equating
P(k+1/k+1) to P(k/k). As in the previous example, this
is optimum filtering when identification of the plant
parameter a is perfect. The resulting covariance is the
solution to a quadratic equation, viz.,

    P_o = ½[√(Q² + 4RQ) − Q]                          (3-38)
Substitution for Q yields

    P_o = ½[√(Ω²a⁴T⁴ + 4RΩa²T²) − Ωa²T²]              (3-39)
Sensitivity of the steady state optimum covariance to the
plant parameter a becomes

    dP_o/da = ΩaT²[(Ωa²T² + 2R)/√(Ω²a⁴T⁴ + 4RΩa²T²) − 1]   (3-40)
The sensitivity function was easily obtained in this example,
and could be used to determine degradation for small pertur-
bations in the parameter a.
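The pair 3-39 and 3-40 can be checked against each other numerically; the sketch below (symbol names and step size ours) compares the closed-form sensitivity with a central difference of the covariance.

```python
import math

# Numerical check of equations 3-39 and 3-40 (symbol names are ours).

def P_o(a, omega, R, T):
    """Optimum steady state covariance, equation 3-39, with Q = omega*a^2*T^2."""
    Q = omega * a**2 * T**2
    return 0.5 * (math.sqrt(Q**2 + 4.0 * R * Q) - Q)

def dP_da(a, omega, R, T):
    """Sensitivity to the plant parameter, equation 3-40."""
    Q = omega * a**2 * T**2
    return omega * a * T**2 * ((Q + 2.0 * R) / math.sqrt(Q**2 + 4.0 * R * Q) - 1.0)

if __name__ == "__main__":
    a, omega, R, T, h = 20.0, 1.0, 0.01, 0.1, 1e-4
    central = (P_o(a + h, omega, R, T) - P_o(a - h, omega, R, T)) / (2.0 * h)
    print("closed form:", dP_da(a, omega, R, T), "  central difference:", central)
```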
As in the previous example, it is now assumed that a_f
was used as the amplification value in the filter model,
and the gain sequence resulting from the misidentification
is known. The final value of the filter gain can be
found by manipulation of equations 3-35, 3-36 and 3-37 to
yield

    G_f = (1/2R)[√(Q_f² + 4RQ_f) − Q_f]               (3-41)
This is the steady state value of the gain sequence used for
the erroneous filter model and therefore can be used to find
the actual value of steady state covariance. The actual
steady state covariance is again found by proper substitu-
tions in equations 3-18 and 3-15 to be

    P_a = [(1 − G_f)²Q_p + G_f²R]/(2G_f − G_f²)       (3-42)
Comparison of equations 3-41 and 3-38 reveals that if
identification were perfect, the optimum steady state co-
variance would be

    P_o = G_o R                                       (3-43)
The degradation due to identification error then becomes

    ΔP = P_a − P_o
       = [(1 − G_f)²Q_p + G_f²R(1 + G_o) − 2G_oG_fR]/(2G_f − G_f²)   (3-44)
From equation 3-34,

    Q_p = Ωa²T²                                       (3-45)
If equation 3-45 is substituted for Q_f in equation 3-41, the
final value of the optimum gain sequence results:

    G_o = (1/2R)[√(Ω²a⁴T⁴ + 4RΩa²T²) − Ωa²T²]         (3-46)

Again using equation 3-34 to obtain Q_f and substituting the
result into equation 3-41 yields

    G_f = (1/2R)[√(Ω²a_f⁴T⁴ + 4RΩa_f²T²) − Ωa_f²T²]   (3-47)
Substitution of equations 3-45, 3-46, and 3-47 into equation
3-44 gives the degradation as a function of a and a_f. It can
be shown that equation 3-44 becomes zero, as required, when
G_o = G_f. A graph of equation 3-44 is shown in figure 3-5.
The constants used were

    R = 0.01;  Ω = 1.0;  a = 20.0

As in the previous example, degradation is not very great
with considerable errors in identification, for the a priori
noise statistics chosen.
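The closed-form gains 3-46 and 3-47 and the degradation 3-44 can be sketched numerically, with the closed-form steady gain cross-checked by iterating equations 3-35 through 3-37; names and the 20% error case are ours.

```python
import math

# Sketch of equations 3-41 through 3-47 for the integrator-amplifier.
# Symbol names are ours; Q = omega * a**2 * T**2 as in equations 3-34/3-45.

def steady_gain(Q, R):
    """Final value of the gain sequence, equation 3-41."""
    return (math.sqrt(Q**2 + 4.0 * R * Q) - Q) / (2.0 * R)

def degradation(G_o, G_f, Q_p, R):
    """Equation 3-44: actual minus optimum steady-state covariance."""
    num = (1.0 - G_f)**2 * Q_p + G_f**2 * R * (1.0 + G_o) - 2.0 * G_o * G_f * R
    return num / (2.0 * G_f - G_f**2)

def iterated_gain(Q, R, n=200):
    """Cross-check: iterate equations 3-35, 3-36 and 3-37 to steady state."""
    P = 0.0
    for _ in range(n):
        M = P + Q                 # P(k+1/k), equation 3-35
        G = M / (M + R)           # equation 3-36
        P = (1.0 - G) * M         # equation 3-37
    return G

if __name__ == "__main__":
    R, omega, a, T = 0.01, 1.0, 20.0, 0.1
    a_f = 0.8 * a                               # 20% identification error
    Q_p, Q_f = omega * a**2 * T**2, omega * a_f**2 * T**2
    G_o, G_f = steady_gain(Q_p, R), steady_gain(Q_f, R)
    print("G_o =", G_o, " (iterated:", iterated_gain(Q_p, R), ")")
    print("dP  =", degradation(G_o, G_f, Q_p, R))
```

Setting G_f = G_o in the degradation expression returns zero, which is the consistency requirement noted in the text.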
36
SL
X 100^
Pig. 3-5 Degradation for the Integrator-Amplifier
37
A final comment on this example is that although the
assumption that the individual terms on the right hand side
of equation 3-22 become constant does not hold for this
plant, the results obtained by using equations 3-15 and 3-18
are still valid. This will be the case whenever Φ(T) is the
identity matrix or unity, as it is in this example.
Example 3. Low Pass Filter
The plant shown in Figure 3-6 represents a more meaning-
ful example and is not too complicated for algebraic analysis.
This is the discrete model for the continuous system trans-
fer function

    x(s)/u(s) = a/(s + a)

[Fig. 3-6 Low Pass Filter: first-order lag driven by u(k), observed
with additive noise v(k)]
The difference equation describing the plant is

    x(k+1) = e^(−aT)x(k) + (1 − e^(−aT))u(k)          (3-48)

therefore

    Φ(T) = e^(−aT);  Γ(T) = 1 − e^(−aT)

The observation is again

    z(k) = x(k) + v(k)

The plant parameter is again a.
As in the previous example, it will be assumed that mis-
identification has resulted in an erroneous filter model,
i.e.,

    Φ_f(T) = e^(−a_f T);  Γ_f(T) = 1 − e^(−a_f T)

It is further assumed that the gain calculation is based on
the Kalman equations, resulting in the following steady
state gain:

    G_f = (Φ_f²P_c + Q_f)/(Φ_f²P_c + Q_f + R)         (3-49)
Now the effects of the erroneous identification can be found
by using equations 3-18, 3-15, 3-20 and 3-21, along with the
gain as given in equation 3-49. However, the steady state
value of P(k/k) is required, which for this example is the
solution of a quadratic scalar equation. From the Kalman
equations it can be shown that

    P_c = (1/2Φ_f²)[RΦ_f² − Q_f − R + √((Q_f + R − RΦ_f²)² + 4RQ_fΦ_f²)]   (3-50)

Substitution of equation 3-50 into 3-49 yields an expression
for G_f in terms of plant variables only:

    G_f = [Q_f + RΦ_f² − R + √((Q_f + R − RΦ_f²)² + 4RQ_fΦ_f²)]
          / [Q_f + RΦ_f² + R + √((Q_f + R − RΦ_f²)² + 4RQ_fΦ_f²)]   (3-51)
It should be noted that a sensitivity function dP/da could
have been obtained by solving for P(k/k) in equation 3-49,
substituting for the steady state gain from equation 3-51,
and then forming ∂P_c/∂Q_f, ∂Q_f/∂a and ∂Φ_f/∂a. However, the
expressions obtained are unwieldy and reveal little insight
into the problem of filter degradation. The sensitivity
function approach has the further limitation of small param-
eter variations, whereas the application of equations 3-18,
3-15, 3-20, 3-21 does not. If the gain as given by equation
3-51 is used for the filter, equations 3-18, 3-15, 3-20,
3-21 give the conditional covariance of estimation error as
the following:
    P_a(k+1/k) = Φ_f²P_a(k/k) + (Φ_p² − Φ_f²)D(k)
                 − 2Φ_f(Φ_p − Φ_f)K(k) + Q_p          (3-52)
Again, it is obvious that when plant model and filter model
coincide, the result is the Kalman equation for conditional
covariance. Proceeding to the expression for the steady
state covariance of estimation error, one obtains

    P_a = [(1 − G_f)²((Φ_p² − Φ_f²)D − 2Φ_f(Φ_p − Φ_f)K + Q_p) + G_f²R]
          / [1 − (1 − G_f)²Φ_f²]                      (3-53)

where D and K are the steady state values of E{x²} and
E{xx̂} respectively. These are found by equating the values
at the (k+1)th iteration to those for the kth iteration as
follows:

    D(k+1) = Φ_p²D(k) + Q_p;   D = Q_p/(1 − Φ_p²)     (3-54)
    K(k+1) = (1 − G_f)Φ_fΦ_p K(k) + G_f D(k+1);
    K = G_f D/[1 − (1 − G_f)Φ_fΦ_p]                   (3-55)
Substituting equations 3-54 and 3-55 into 3-53, one obtains
an expression for the actual steady state covariance in
terms of the gain:

    P_a = [(1 − G_f)²((Φ_p² − Φ_f²)D − 2Φ_f(Φ_p − Φ_f)K + Q_p) + G_f²R]
          / [1 − (1 − G_f)²Φ_f²],
    D = Q_p/(1 − Φ_p²),  K = G_f D/[1 − (1 − G_f)Φ_fΦ_p]   (3-56)

Equation 3-56 shows that when the plant and filter models
coincide, the expression for the optimum covariance would be

    P_o = G_o R                                       (3-57)

where G_o is the steady state gain obtained by using the
Kalman equations with the correct model, as in equation 3-51.
Making substitutions for G_o and G_f, the degradation in
filter performance can be found as

    ΔP = P_a − P_o

i.e., equation 3-56 minus equation 3-57.
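The scalar recursions of equations 3-52 through 3-55 can be run directly. The sketch below is our reading of them: the symbol names, the fixed-gain measurement update P(k+1/k+1) = (1 − G_f)²P(k+1/k) + G_f²R, and the iteration counts are our own choices. A matched model must reproduce the ordinary Kalman steady state.

```python
import math

# Scalar recursions 3-52 through 3-55 for the low pass filter example.
# D(k) = E{x^2}, K(k) = E{x xhat}; names and fixed-gain assumption ours.

def actual_covariance(phi_p, phi_f, Q_p, R, G_f, n=500):
    """Actual steady-state covariance of estimation error."""
    D = K = P = 0.0
    for _ in range(n):
        D_next = phi_p**2 * D + Q_p                        # equation 3-54
        M = (phi_f**2 * P + (phi_p**2 - phi_f**2) * D
             - 2.0 * phi_f * (phi_p - phi_f) * K + Q_p)    # equation 3-52
        P = (1.0 - G_f)**2 * M + G_f**2 * R                # measurement update
        K = (1.0 - G_f) * phi_f * phi_p * K + G_f * D_next # equation 3-55
        D = D_next
    return P

def kalman_steady(phi, Q, R, n=500):
    """Matched-model steady state from the scalar Kalman recursions."""
    P = G = 0.0
    for _ in range(n):
        M = phi**2 * P + Q
        G = M / (M + R)
        P = (1.0 - G) * M
    return P, G

if __name__ == "__main__":
    a, a_f, omega, R, T = 2.0, 1.5, 1.0, 0.01, 0.1
    phi_p, phi_f = math.exp(-a * T), math.exp(-a_f * T)
    Q_p = (1.0 - phi_p)**2 * omega
    Q_f = (1.0 - phi_f)**2 * omega
    P_f, G_f = kalman_steady(phi_f, Q_f, R)      # gain from the wrong model
    P_a = actual_covariance(phi_p, phi_f, Q_p, R, G_f)
    P_o, G_o = kalman_steady(phi_p, Q_p, R)
    print("P_a =", P_a, "  P_o =", P_o, "  dP =", P_a - P_o)
```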
CHAPTER 4
COMPUTER SIMULATIONS
The great increase in complexity of sensitivity
functions which accompanies the slightest increase in
system complexity was readily evident in Chapter 3.
Even a scalar case such as the low pass filter with a
single pole produces unwieldy algebraic expressions for
sensitivity. For systems of second order or higher, it
appears to be more advantageous to perform a computer
simulation of some specific case. This portion of the
investigation was performed on the CDC 1604 Digital
Computer and consisted of four parts. The first was a
verification of the algorithm derived in Chapter 3
(equations 3-18, 3-15, 3-20 and 3-21). This algorithm,
while ostensibly accurate, provides numerous opportunities
for error in its implementation. The remaining simu-
lations were investigations of specific examples to
test the utility of the recursive solution in actual
problems. The desired end result was a means of knowing
the degradation of filter performance as a function of
error in one or more plant parameters, given the filter
operating parameter values. Such information, along with
the knowledge (or an estimate) of the accuracy of the
filter model parameters, could be useful in deciding
whether more (or less) accurate identification is called
for. For example, assume the model for a second order
system uses a damping factor and a natural frequency
which through some previous error analysis are known
to be accurate within 10 per cent. A look at the steady
state solution obtained from the recursive equations
based on ten per cent errors will provide the actual
degradation in filter performance if the plant parameters
lie on the tolerance limit.
For this example, suppose that this amount of
degradation from optimum is incompatible with the
estimation accuracy requirements. By examining the re-
sults for various lower parameter errors it will become
apparent to what accuracy the parameters must be iden-
tified. A flow chart for this type of investigation is
shown in Figure 4-1. Given a gain sequence, an estimate
of parameter error, the filter model and the correct Ω,
R, and H matrices, the quantities Φ_f, Φ_p, Γ_f, and Γ_p
can be found and used to implement a recursive sequence
of equations 3-18, 3-15, 3-20 and 3-21. In all examples
which follow, the filter model employs correct initial-
ization and accurate Ω and R matrices. All are single
input systems with only one observed state, making Ω and
R scalars, with values taken as 1.0 and 0.01 respectively.
a. VERIFICATION OF RECURSIVE SOLUTION
Equations 3-18, 3-15, 3-20 and 3-21 of Chapter 3
were verified by comparing the actual steady state
covariance matrix trace with that obtained by driving
a simulated plant, observing the entire state vector,
and computing the quantity (x − x̂)(x − x̂)ᵀ. This was
[Fig. 4-1 Calculation of Filter Performance Degradation: flow chart from
model parameters and parameter errors to degradation due to Δa's]
done for each filter-plant combination for 1000 different
random sequences of driving and measurement noise, pre-
serving the average values at each successive iteration.
At least 30 iterations at steady state were used. The
ensemble averages were then averaged in time, giving
30 samples from which hypothesis testing could be done.
The entire procedure above was repeated for numerous
points including from zero to 20 per cent errors in each
plant parameter in order to verify that no programming
errors existed in the calculation of actual steady state
covariance.
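The verification procedure can be sketched in miniature with a scalar plant and a matched fixed-gain filter; the plant numbers, trial counts, and names below are our own choices, not those of the original CDC 1604 simulation.

```python
import random

# Monte-Carlo check in the spirit of section a: drive a simulated scalar
# plant with random noise sequences and compare the ensemble mean-square
# estimation error against the covariance given by the recursive equations.

def steady_gain(phi, Q, R, n=100):
    """Steady state Kalman gain for the matched scalar model."""
    P = G = 0.0
    for _ in range(n):
        M = phi**2 * P + Q
        G = M / (M + R)
        P = (1.0 - G) * M
    return G

def predicted_cov(phi, Q, R, G, steps):
    """Covariance after `steps` iterations of the fixed-gain recursion."""
    P = 0.0
    for _ in range(steps):
        M = phi**2 * P + Q
        P = (1.0 - G)**2 * M + G**2 * R
    return P

def sample_mse(phi, Q, R, G, trials=4000, steps=60, seed=1):
    """Ensemble average of (x - xhat)^2 over many noise sequences."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = xhat = 0.0                                # correct initialization
        for _ in range(steps):
            x = phi * x + rng.gauss(0.0, Q**0.5)      # plant propagation
            z = x + rng.gauss(0.0, R**0.5)            # noisy observation
            xhat = phi * xhat + G * (z - phi * xhat)  # fixed-gain filter
        total += (x - xhat)**2
    return total / trials

if __name__ == "__main__":
    phi, Q, R = 0.9, 1.0, 0.01
    G = steady_gain(phi, Q, R)
    print("ensemble MSE:", sample_mse(phi, Q, R, G))
    print("recursive   :", predicted_cov(phi, Q, R, G, 60))
```

The ensemble figure fluctuates with the noise seed, which is why the thesis preserved averages over many runs before testing hypotheses.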
b. EXAMPLE OF TWO-PARAMETER SENSITIVITY
Next, a numerical example of two-parameter sen-
sitivity was performed using the method outlined in
Figure 4-1. The second order model was chosen to be of
the form

    ω_f²/(s² + 2ζ_f ω_f s + ω_f²)                     (4-1)

with filter parameters ζ_f and ω_f taken as cos(π/4) and
10.0, respectively. The plant was assumed to be driven
and sampled at 0.1 second intervals, with initial con-
ditions zero, and with x₁ the only observable state.
The plant parameters ζ_p and ω_p were varied from zero to
+50% of those used by the filter model. For each set of
plant parameters, the difference between the actual
covariance trace and that which could be obtained if the
filter matched the plant is computed and stored. The re-
sulting values are points on a bowl-shaped surface which
45
100 %
50
Fig. 4-2 Degradation vs Parameter Errors
(2-paraimeter model)
o o
are then contoured by linear interpolation into the
plane. Figure 4-2 shows the resulting contour map. The
contours are marked as a percentage degradation from the
optimal trace as a function of the percentage error in
the filter parameters. Such a graph could assist not
only in the type of decision mentioned earlier, but also
in a determination of which direction of error is more
costly by considering the "gradient" of the surface in
the various directions in the parameter plane.
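The sweep of Figure 4-1 can be sketched in modern code. The matrix recursions 3-18, 3-15, 3-20 and 3-21 are not reproduced here; instead the actual error covariance is obtained from an equivalent joint covariance recursion on the stacked vector [plant state; filter estimate]. The model, sampling interval, and noise levels follow the text; the discretization helper and all names are ours.

```python
import numpy as np

def discretize(A, B, T, squarings=10, terms=12):
    """Phi = exp(A T) and Gamma = (integral of exp(A t) dt) B, via a
    Taylor series on a small substep followed by step doubling."""
    n = A.shape[0]
    h = T / 2.0**squarings
    Phi, S = np.eye(n), np.eye(n) * h
    Ak, fact = np.eye(n), 1.0
    for k in range(1, terms):
        Ak = Ak @ A
        fact *= k
        Phi = Phi + Ak * h**k / fact
        S = S + Ak * h**(k + 1) / (fact * (k + 1))
    Gam = S @ B
    for _ in range(squarings):          # double the step: 2h, 4h, ..., T
        Gam = Gam + Phi @ Gam
        Phi = Phi @ Phi
    return Phi, Gam

def second_order(zeta, w):
    """State-space form of w^2 / (s^2 + 2 zeta w s + w^2)."""
    A = np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]])
    B = np.array([[0.0], [w**2]])
    return A, B

def kalman_steady(Phi, Gam, H, Om, R, n=2000):
    """Steady state gain and covariance for a matched model."""
    m = Phi.shape[0]
    P = np.zeros((m, m))
    G = np.zeros((m, 1))
    for _ in range(n):
        M = Phi @ P @ Phi.T + Gam * Om @ Gam.T
        G = M @ H.T / (H @ M @ H.T + R)
        P = (np.eye(m) - G @ H) @ M
    return G, P

def actual_cov(Phi_p, Gam_p, Phi_f, G, H, Om, R, n=2000):
    """Error covariance of the fixed-gain filter driven by the true plant."""
    m = Phi_p.shape[0]
    Qp = Gam_p * Om @ Gam_p.T
    A = np.block([[Phi_p, np.zeros((m, m))],
                  [G @ H @ Phi_p, (np.eye(m) - G @ H) @ Phi_f]])
    W = np.block([[Qp, Qp @ H.T @ G.T],
                  [G @ H @ Qp, G @ H @ Qp @ H.T @ G.T + G * R @ G.T]])
    S = np.zeros((2 * m, 2 * m))
    for _ in range(n):                  # joint covariance of [x; xhat]
        S = A @ S @ A.T + W
    J = np.hstack([np.eye(m), -np.eye(m)])
    return J @ S @ J.T                  # covariance of x - xhat

if __name__ == "__main__":
    T, Om, R = 0.1, 1.0, 0.01
    H = np.array([[1.0, 0.0]])
    zf, wf = np.cos(np.pi / 4.0), 10.0
    Phi_f, Gam_f = discretize(*second_order(zf, wf), T)
    G, _ = kalman_steady(Phi_f, Gam_f, H, Om, R)
    for err in (0.0, 0.25, 0.5):        # equal error in both plant parameters
        Phi_p, Gam_p = discretize(*second_order(zf * (1 + err), wf * (1 + err)), T)
        Pa = actual_cov(Phi_p, Gam_p, Phi_f, G, H, Om, R)
        _, Po = kalman_steady(Phi_p, Gam_p, H, Om, R)
        print(f"parameter error {err:4.0%}: degradation "
              f"{100.0 * (np.trace(Pa) - np.trace(Po)) / np.trace(Po):.4f}% of optimum")
```

Sweeping ζ_p and ω_p over a grid and contouring the resulting degradation values reproduces a map of the kind shown in Figure 4-2.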
c. MODEL ORDER REDUCED BY ONE
The case in which a third order system with a
complex conjugate pair of poles and a remote real pole
is to be filtered by a second order model was also
considered as a numerical example. The erroneous filter
model was based on the plant transfer function

    X(s)/U(s) = 2/(s² + 2s + 2)                       (4-2)

while the optimum filter used the model

    X(s)/U(s) = 2a/[(s² + 2s + 2)(s + a)]             (4-3)
where a was allowed to vary from 50 to 0.5 in increments
of 0.5. Both models are type 0 with the same complex
poles, since this is considered to be a case of deliberate
misidentification.
The recursive matrix equations were made compatible
by the addition of a row and column of zeros in the state
transition matrix for the filter and used as before.
[Fig. 4-3 Degradation vs Parameter Errors (3-parameter model, 1 incorrect)]
Another minor difference is that only the upper two
elements of the diagonal were used when considering the
trace of the covariance matrix of estimation error, since
only two of the three plant states were estimated. The
parameter a becomes the value of the real part of the
remote pole, and the expected result is a monotonically
increasing degradation as the pole location becomes
less remote. Figure 4-3 shows a typical graph of this
result.
d. MODEL ORDER REDUCED BY TWO
A numerical example similar to c above was
simulated in which a fourth order type 0 system with two
complex conjugate pole pairs was reduced to a second
order type 0 filter model with the dominant pole pair
identified exactly. The sensitivity parameters were
taken as the damping factor ζ and natural frequency ω
associated with the remote complex pole pair. The pole
locations for the plant were taken to be representative
of the short period and phugoid oscillatory modes in
the linearized model of an aircraft over a limited flight
regime. The interpretation would be to find the degra-
dation in estimation of altitude and altitude rate which
results from ignoring the short period vertical oscilla-
tions of the airframe produced by elevator perturbations
and air gusts. The results are shown in Figure 4-4.
The numerical values used for the accurate model were

    ω₁ = 1.15,  ζ₁ = 0.35,  ω₂ = 0.073,  ζ₂ = 0.035   (4-4)
49
100 %
0 ), - 0 )
— — X 100%
0)
Pig. 4-4 Degradation vs Parameter Errors
(4 parameter model, 2 incorrect)
50
while the erroneous model was taken as

    ω = 0.073,  ζ = 0.035                             (4-5)

The parameter ζ₁ was allowed to vary from 0.175 to 0.525,
with values of ω₁ from 0.575 to 1.725.
CHAPTER 5
CONCLUSIONS
The principal result of this investigation has been
the derivation of an algorithm to replace the Kalman filter
gain calculation when errors in the model of the dynamics
of an observed system are known to exist. This algorithm
can be used in two ways: either as a means for producing
optimal estimates in a low order filter, or to determine
the cost of parameter mis-identif ication in terms of
estimation accuracy for some specific system. In the
first application, the reduction in computation time
associated with low order filtering is partially negated
by the requirement for making two additional calculations
at each iteration. Therefore such an application probably
would become profitable only if the system model order can
be reduced by two or more in the filter. It is felt
that the use of equations 3-18, 3-15, 3-20 and 3-21 with
various suboptimal gain sequences could assist in numerous
design studies. An example would be the study of fil-
ter performance degradation where a single filter model
is to be used with many plants, each having slightly
different parameters. Another example would be appli-
cation of a Kalman filter scheme to a system with
parameters which vary slowly with time.
Development of the recursive expressions for cal-
culating the actual covariance of estimation error, together
with the computer simulation to test their utility, has
revealed some interesting sidelights. Perhaps the most
significant of these is the difficulty in obtaining a
sensitivity function in the usual sense for other than the
scalar cases. The second order, two-parameter case
yields a set of four simultaneous non-linear matrix
equations from which the partial derivatives must be
produced. Thus, the sensitivity function approach was
abandoned in favor of the recursive solution of actual
degradation.
Another interesting result was the fact that, in the
particular numerical examples used in Chapter 4 the fil-
ter performance degradation was not nearly as great as
the authors had anticipated. The two-parameter degra-
dation contours shown in Figure 4-2 describe a bowl-
shaped surface in the "parameter error plane" as would
be expected. However, the surface has a relatively flat
bottom and allows considerable parameter error in certain
directions without exceeding a one per cent degradation
in performance. In view of the analytical results of the
scalar examples in Chapter 3, the gradient of this surface
near its minimum is considered to depend heavily on the
values chosen for Ω and R. It is known that the ratio
Ω/R greatly affects variance reduction in most tracking
filters. The higher this value, the greater will be
the variance reduction. The numerical ratio used in all
examples was 100, which is probably optimistic. A study
of the effect of this ratio on the gradient of the
surface of figure 4-2 might verify the foregoing
remarks.
From the examples of a low-order filter model, it
appears that the idea of second order dominance for more
complicated systems may have promise in certain digital
filter applications. Although each specific application
requires a simulation such as those in Chapter 4 , much
of the guesswork associated with exactly what constitutes
second order dominance can be eliminated, once the sim-
ulation is performed. This subject might warrant further
investigation to learn just how "remote" higher order
system poles must be, how the values Ω, R, and Ω/R
affect estimation accuracy, etc.
A related effect of erroneous filter models noted
in this investigation was a significant increase in the
number of iterations required to achieve "steady state"
in the calculated covariance as model errors increased.
The filter "settling time" naturally depends heavily on
initialization of x̂(0/0) and P(0/0), but dependence on
errors in the plant model can further aggravate the
situation. Filter settling time or "lock-on" can be
very critical in certain applications such as fire control
systems. This is another area which could be explored
further.
While the main objectives of this investigation
have been realized, the Kalman filter is far from a dead
issue. On the contrary, completion of this work has
served to open several new questions which can lead to
successful application of the theoretical concepts
embodied in optimal state estimation.
BIBLIOGRAPHY
1. Bode, H. W., et al. A Simplified Derivation of Linear
   Least Square Smoothing and Prediction Theory.
   Proceedings of the I.R.E., April 1950. pp. 417-425.

2. Fagin, S. L. Recursive Linear Regression Theory,
   Optimal Filter Theory, and Error Analysis of Optimal
   Systems. IEEE Convention Record, 1964. pp. 230-235.

3. Heffes, H. The Effect of Erroneous Models on the
   Kalman Filter Response. IEEE Transactions on Automatic
   Control, v. AC-11, no. 3, July 1966. pp. 541-543.

4. Jardine, F. D. Optimal Filter Design for Sampled
   Data Systems with Illustrative Examples. Naval
   Postgraduate School, M.S. Thesis J294, 1965.

5. Kalman, R. E. A New Approach to Linear Filtering
   and Prediction Problems. Transactions of the ASME,
   Journal of Basic Engineering, v. 82, March 1960.
   pp. 35-45.

6. Kalman, R. E., and Bucy, R. S. New Results in Linear
   Filtering and Prediction Theory. Transactions of the
   ASME, Journal of Basic Engineering, March 1961.
   pp. 95-108.

7. Lee, R. C. K. Optimal Estimation, Identification
   and Control. Cambridge, MIT Press, 1964.

8. Nishimura, T. On the a Priori Information in
   Sequential Estimation Problems. IEEE Transactions
   on Automatic Control, v. AC-11, no. 2, April 1966.
   pp. 197-204.
INITIAL DISTRIBUTION LIST
No. Copies
1. Defense Documentation Center 20
Cameron Station
Alexandria, Virginia 22314
2 . Library 2
Naval Postgraduate School
Monterey, California
3. Naval Ship Systems Command 1
Department of the Navy
Washington, D.C. 20360
4. Prof. James S. Demetry 2
Department of Electrical Engineering
Naval Postgraduate School
Monterey, California
5. LCDR Larry B. Nofziger, USN 2
Naval Air Systems Command Headquarters
Washington, D.C. 20360
6. LT Gerald Lee Devins, USN 1
Patrol Squadron Forty-Seven
FPO San Francisco, California
7. S. R. Neal 1
Naval Ordnance Test Station
China Lake, California
8. Prof. Harold A. Titus 1
Department of Electrical Engineering
Naval Postgraduate School
Monterey, California